Early identification of COVID-19 patients who require special care and have a high expected risk of death, together with the effective determination of relevant biomarkers on large sample groups, is important for reducing mortality. This study aimed to reveal routine blood-value predictors of COVID-19 mortality and to determine the fatal risk levels of these predictors during the course of the disease. The dataset consists of 38 routine blood values of 2597 COVID-19 patients treated between August and December 2021, of whom 233 died and 2364 recovered. In this study, the histogram-based gradient-boosting (HGB) model was the most successful machine-learning classifier in distinguishing surviving from deceased COVID-19 patients (with the squared F1 metric, F1^2 = 1). The most efficient binary combinations with procalcitonin were obtained with D-dimer, ESR, D-Bil, and ferritin. The HGB model operated with these feature pairs correctly detected almost all of the patients who survived and those who died (precision > 0.98, recall > 0.98, F1^2 > 0.98). Furthermore, when the HGB model was operated with a single feature, the most efficient features were procalcitonin (F1^2 = 0.96) and ferritin (F1^2 = 0.91). In addition, according to the two-threshold approach, ferritin values between 376.2 µg/L and 396.0 µg/L (F1^2 = 0.91) and procalcitonin values between 0.2 µg/L and 5.2 µg/L (F1^2 = 0.95) were found to be fatal risk levels for COVID-19. Considering all the results, we suggest that these features, especially procalcitonin and ferritin, combined with many other features and operated with the HGB model, can be used to achieve very successful classification of patients who survive and patients who die from COVID-19. Moreover, we strongly recommend that clinicians consider the critical levels we have found for procalcitonin and ferritin in order to reduce the lethality of COVID-19.
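As a rough, self-contained illustration of the kind of two-feature HGB classification described above, the following sketch trains scikit-learn's HistGradientBoostingClassifier on synthetic procalcitonin and ferritin values. The study's clinical dataset and preprocessing are not reproduced here; the value ranges and class balance are placeholders loosely inspired by the numbers quoted in the abstract.

```python
# Minimal sketch: HGB classifier on two synthetic blood-value features.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n = 2597
died = rng.random(n) < 233 / 2597                                   # class balance from the abstract
procalcitonin = np.where(died, rng.uniform(0.2, 5.2, n), rng.uniform(0.01, 0.2, n))
ferritin = np.where(died, rng.uniform(376, 396, n), rng.uniform(20, 300, n))
X = np.column_stack([procalcitonin, ferritin])
y = died.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision", precision_score(y_te, pred),
      "recall", recall_score(y_te, pred),
      "F1", f1_score(y_te, pred))
```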
Approximation of entropies of various types using machine learning (ML) regression methods is shown for the first time. The ML models presented in this study quantify the complexity of short time series by approximating dissimilar entropy techniques such as singular value decomposition entropy (SvdEn), permutation entropy (PermEn), sample entropy (SampEn), and neural network entropy (NNetEn), together with their 2D analogues. A new method for calculating SvdEn2D, PermEn2D, and SampEn2D for 2D images was tested using the technique of circular kernels. Training and testing datasets built from Sentinel-2 images are presented (two training images and 198 testing images). The results of entropy approximation are demonstrated using the example of calculating the 2D entropy of Sentinel-2 images and evaluating the R^2 metric. The applicability of the method is shown for short time series with lengths from N = 5 to N = 113 elements. A tendency for the R^2 metric to decrease with increasing time-series length was found. For SvdEn entropy, the regression accuracy is R^2 > 0.99 for N = 5 and R^2 > 0.82 for N = 113. The best metrics were observed for the ML_SvdEn2D and ML_NNetEn2D models. The results of the study can be used for fundamental research on entropy approximations of various types using ML regression, as well as for accelerating entropy calculations in remote sensing. The versatility of the model is shown on synthetic chaotic time series generated with the Planck map and the logistic map.
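The core idea of approximating an entropy with a regressor can be sketched as follows. The SvdEn implementation, the embedding parameters (m, tau), the choice of regressor, and the synthetic series standing in for Sentinel-2 pixel sequences are all illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: compute SvdEn directly for short series, then approximate it with ML regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

def svd_entropy(x, m=3, tau=1):
    """Singular value decomposition entropy of a 1-D series (illustrative parameters)."""
    rows = len(x) - (m - 1) * tau
    Y = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(rows)])  # delay embedding
    s = np.linalg.svd(Y, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
N = 20                                        # short series, as in the abstract (N = 5..113)
series = rng.random((3000, N))                # synthetic stand-in for Sentinel-2 pixel series
target = np.array([svd_entropy(s) for s in series])

reg = GradientBoostingRegressor(random_state=0).fit(series[:2500], target[:2500])
print("R^2 on held-out series:", r2_score(target[2500:], reg.predict(series[2500:])))
```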
Healthcare digitalization requires effective methods of human sensing, in which various parameters of the human body are monitored in everyday life and connected to the Internet of Things (IoT). In particular, machine learning (ML) sensors for the rapid diagnosis of COVID-19 are an important case of IoT applications in healthcare and ambient assisted living (AAL). Determining the COVID-19 infection status through various diagnostic tests and imaging is costly and time-consuming. The aim of this study is to provide a fast, reliable, and economical alternative tool for the diagnosis of COVID-19 based on routine blood values (RBV). The dataset of the study consists of a total of 5296 patients with an equal number of negative and positive COVID-19 test results and 51 routine blood values. In this study, 13 popular classifier machine learning models and the LogNNet neural network model were evaluated. In terms of the speed and accuracy of disease detection, the most successful classifier model was histogram-based gradient boosting (HGB). The HGB classifier identified the 11 most important features (LDL, cholesterol, HDL-C, MCHC, triglycerides, amylase, UA, LDH, CK-MB, ALP, and MCH), detecting the disease with 100% accuracy and a learning time of 6.39 s. In addition, the importance of single, double, and triple combinations of these features in disease diagnosis is discussed. We recommend using these 11 features and their combinations as important biomarkers for ML sensors in diagnosing the disease, supporting edge computing on Arduino and cloud IoT services.
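A minimal sketch of ranking blood-value features with an HGB classifier is given below. The feature names are taken from the abstract, but the data and labels are synthetic placeholders, and permutation importance is used here only as one plausible ranking criterion; the study's own feature-selection procedure may differ.

```python
# Sketch: rank synthetic blood-value features by permutation importance with HGB.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["LDL", "cholesterol", "HDL-C", "MCHC", "triglycerides",
                 "amylase", "UA", "LDH", "CK-MB", "ALP", "MCH"]
X = rng.normal(size=(5296, len(feature_names)))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=len(X)) > 0).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"{feature_names[i]:>14s}  {imp.importances_mean[i]:.3f}")
```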
Since February 2020, the world has been engaged in an intense struggle with the COVID-19 disease, and health systems have come under tragic pressure as the disease turned into a pandemic. The aim of this study is to obtain the most effective routine blood values (RBV) for the diagnosis and prognosis of COVID-19 using a backward feature elimination algorithm applied to the LogNNet reservoir neural network. The first dataset in the study consists of 5296 patients with an equal number of negative and positive COVID-19 cases. The LogNNet model achieved an accuracy of 99.5% in diagnosing the disease, and an accuracy of 99.17% using only mean corpuscular hemoglobin concentration, mean corpuscular hemoglobin, and activated partial thromboplastin time. The second dataset consists of a total of 3899 patients diagnosed with COVID-19 and treated in hospital, of whom 203 were severe patients and 3696 were mild patients. The model reached an accuracy of 94.4% in determining the prognosis of the disease with 48 features, and an accuracy of 82.7% using only the erythrocyte sedimentation rate, neutrophil count, and C-reactive protein features. Our method will reduce the heavy burden on the health sector and help doctors understand the pathogenesis of COVID-19 through the key features. The method is promising for creating mobile health-monitoring systems in the Internet of Things.
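The backward feature elimination loop can be sketched as follows. A small scikit-learn MLP stands in for the LogNNet reservoir network, and the data, stopping criterion, and cross-validation setup are illustrative assumptions.

```python
# Sketch: backward feature elimination — repeatedly drop the feature whose removal
# hurts cross-validated accuracy the least.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))                              # synthetic stand-in for RBV features
y = (X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=600) > 0).astype(int)
features = list(range(X.shape[1]))

def cv_accuracy(cols):
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

while len(features) > 3:                                    # eliminate down to 3 features
    candidates = [(cv_accuracy([f for f in features if f != drop]), drop) for drop in features]
    best_acc, drop = max(candidates)                        # remove the least harmful feature
    features.remove(drop)
    print(f"dropped {drop}, kept {features}, CV accuracy {best_acc:.3f}")
```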
Measuring the predictability and complexity of time series using entropy is an essential tool for designing and controlling nonlinear systems. However, existing methods have some drawbacks related to the strong dependence of the entropy on the parameters of the method. To overcome these difficulties, this study proposes a new method for estimating the entropy of a time series using the LogNNet neural network model. According to our algorithm, the LogNNet reservoir matrix is filled with the elements of the time series. The accuracy of image classification on the MNIST-10 database is taken as the entropy measure and is denoted NNetEn. The novelty of this entropy calculation is that the time series participates in mixing the input information inside the reservoir. Greater complexity of the time series leads to higher classification accuracy and a higher NNetEn value. We introduce a new time-series characteristic, called time series learning inertia, which determines the learning rate of the neural network. The robustness and efficiency of the method are verified on chaotic, periodic, random, binary, and constant time series. A comparison of NNetEn with other entropy estimation methods shows that our method is more robust and accurate, and can be widely used in practice.
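A rough, hedged illustration of the NNetEn principle (not the authors' LogNNet implementation) is shown below: a fixed reservoir matrix is filled by tiling the time series, a small image-classification dataset (scikit-learn digits standing in for MNIST-10) is passed through it, and the resulting classification accuracy is read as an entropy-like score.

```python
# Toy proxy for NNetEn: reservoir weights come from the time series itself.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def nneten_proxy(series, n_hidden=25, seed=0):
    X, y = load_digits(return_X_y=True)                      # 8x8 digit images, 64 features
    W = np.resize(np.asarray(series, dtype=float), (n_hidden, X.shape[1]))  # tile series into reservoir
    W = W - W.mean()                                          # center to avoid tanh saturation (simplification)
    H = np.tanh((X / 16.0) @ W.T)                             # fixed reservoir transform
    X_tr, X_te, y_tr, y_te = train_test_split(H, y, random_state=seed)
    clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)                              # accuracy read as an entropy proxy

rng = np.random.default_rng(0)
print("random series  :", nneten_proxy(rng.random(200)))      # complex series -> higher score
print("constant series:", nneten_proxy(np.ones(200)))         # simple series  -> lower score
```

In this toy setup a random series yields a noticeably higher score than a constant one, mirroring the claim that greater time-series complexity produces higher NNetEn values.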
In the era of neural networks and the Internet of Things (IoT), the search for new neural network architectures that can run with limited computing power and a small memory footprint has become an urgent agenda. Designing suitable algorithms for IoT applications is an important task. This paper proposes a feed-forward LogNNet neural network that uses a semi-linear Henon-type discrete chaotic map to classify the MNIST-10 dataset. The model consists of a reservoir part and a trainable classifier. The purpose of the reservoir part is to transform the input, using a special matrix-filling method and the time series generated by the chaotic map, so as to maximize classification accuracy. The parameters of the chaotic map are optimized using particle swarm optimization with random immigrants. As a result, the proposed LogNNet/Henon classifier achieves higher accuracy at the same RAM usage compared with the original version of LogNNet, and offers promising opportunities for implementation on IoT devices. In addition, a direct relationship between the value of the entropy of classification and the accuracy of classification is demonstrated.
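The chaotic ingredient can be sketched as follows: a Henon-type series is generated and tiled into a reservoir matrix. The parameters a and b below are the classic Henon values; the paper's semi-linear variant and its particle-swarm tuning with random immigrants are omitted for brevity.

```python
# Sketch: generate a Henon time series and fill a small reservoir matrix with it.
import numpy as np

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    x, y = x0, y0
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = x
    return out

series = henon_series(2000)
reservoir = np.resize(series, (25, 64))   # fill a 25x64 reservoir matrix row by row
print(series[:5], reservoir.shape)
```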
Autonomous driving is an exciting new industry, posing important research questions. Within the perception module, 3D human pose estimation is an emerging technology that can enable the autonomous vehicle to perceive and understand the subtle and complex behaviors of pedestrians. While hardware systems and sensors have improved dramatically over the decades -- with cars now potentially equipped with complex LiDAR and vision systems, and with a growing body of dedicated datasets capturing these newly available signals -- not much work has been done to harness them for the core problem of 3D human pose estimation. Our method, which we coin HUM3DIL (HUMan 3D from Images and LiDAR), efficiently makes use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin. It is a fast and compact model suited for onboard deployment. Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages. Quantitative experiments on the Waymo Open Dataset support these claims: we achieve state-of-the-art results on the task of 3D pose estimation.
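A hedged PyTorch sketch of the pixel-aligned multi-modal idea follows: LiDAR points are projected into the image, CNN features are bilinearly sampled at those pixels, the 3D coordinates are appended, and Transformer layers refine the resulting tokens. The shapes, toy camera model, and regression head are illustrative assumptions, not HUM3DIL's actual architecture.

```python
# Sketch: pixel-aligned LiDAR tokens refined by a Transformer encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, C, H, W = 2, 128, 32, 96, 96
feats = torch.randn(B, C, H, W)                               # image feature map from any 2D backbone
points = torch.randn(B, N, 3) + torch.tensor([0., 0., 5.])    # LiDAR points in the camera frame
K = torch.tensor([[60., 0., W / 2], [0., 60., H / 2], [0., 0., 1.]])  # toy pinhole intrinsics

uvw = points @ K.T                                            # project points into the image
uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)
grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1) * 2 - 1
pixel_feats = F.grid_sample(feats, grid.unsqueeze(2), align_corners=True)       # B x C x N x 1
tokens = torch.cat([pixel_feats.squeeze(-1).transpose(1, 2), points], dim=-1)   # B x N x (C+3)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=C + 3, nhead=5, batch_first=True), num_layers=2)
refined = encoder(tokens)
joints = nn.Linear(C + 3, 15 * 3)(refined.mean(dim=1)).view(B, 15, 3)  # toy 15-joint regressor
print(joints.shape)
```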
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, beyond just the body shape, which in turn also helps model accessories, hair, and loose clothing. Owing to this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as the output of a single end-to-end model trained in a semi-supervised manner, with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, relighting, and re-posing of the reconstruction, and can naturally be extended to handle multiple input images (e.g. different views of a person, or the same view in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications.
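One way to picture the pooling of image features onto body-surface points is sketched below: 3D points are sampled on mesh triangles with barycentric weights, and each point attends over flattened image-feature tokens via cross-attention. The mesh, features, and layer sizes are toy stand-ins, not the S3F model itself.

```python
# Sketch: sample surface points on a toy mesh and pool image features onto them.
import torch
import torch.nn as nn

V = torch.randn(100, 3)                                      # mesh vertices (e.g. a body template)
Fc = torch.randint(0, 100, (150, 3))                         # toy triangle faces
w = torch.distributions.Dirichlet(torch.ones(3)).sample((512,))   # barycentric weights
tri = Fc[torch.randint(0, 150, (512,))]
points = (V[tri] * w.unsqueeze(-1)).sum(dim=1)               # 512 surface points

img_feats = torch.randn(1, 24 * 24, 64)                      # flattened image feature tokens
queries = nn.Linear(3, 64)(points).unsqueeze(0)              # embed point coordinates as queries
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
pooled, _ = attn(queries, img_feats, img_feats)              # per-point pooled image features
print(pooled.shape)                                          # 1 x 512 x 64
```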
Domain adaptation has been vastly investigated in computer vision but still requires access to target images at train time, which might be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of 'Prompt-driven Zero-shot Domain Adaptation', where we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to target text embeddings, while preserving their content and semantics. Second, we show that augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets, and gives comparable results on others. The code is available at https://github.com/astra-vision/PODA.
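A minimal sketch of the prompt-driven feature adjustment is given below: a per-channel affine transform (scale and shift) of source features is optimized so that their pooled embedding moves toward a target-domain text embedding, with an L2 term keeping them close to the original content. The embeddings here are random placeholders; the actual method uses a pretrained CLIP model and its image and text encoders.

```python
# Sketch: optimize an affine transform of source features toward a prompt embedding.
import torch
import torch.nn.functional as F

C = 512
source_feat = torch.randn(64, C)                  # source image features (placeholder)
text_emb = F.normalize(torch.randn(C), dim=0)     # stand-in for CLIP("driving at night"), assumed precomputed

scale = torch.ones(C, requires_grad=True)
shift = torch.zeros(C, requires_grad=True)
opt = torch.optim.Adam([scale, shift], lr=1e-2)

for step in range(100):
    stylized = source_feat * scale + shift
    emb = F.normalize(stylized.mean(dim=0), dim=0)
    loss = 1 - torch.dot(emb, text_emb) + 0.1 * F.mse_loss(stylized, source_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```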
Graph learning problems are typically approached by focusing on learning the topology of a single graph when signals from all nodes are available. However, many contemporary setups involve multiple related networks and, moreover, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by this, we propose a joint graph learning method that takes into account the presence of hidden (latent) variables. Intuitively, the presence of the hidden nodes renders the inference task ill-posed and challenging to solve, so we overcome this detrimental influence by harnessing the similarity of the estimated graphs. To that end, we assume that the observed signals are drawn from a Gaussian Markov random field with latent variables and we carefully model the graph similarity among hidden (latent) nodes. Then, we exploit the structure resulting from the previous considerations to propose a convex optimization problem that solves the joint graph learning task by providing a regularized maximum likelihood estimator. Finally, we compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.
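A hedged cvxpy sketch of the regularized maximum-likelihood formulation for two related graphs is shown below; for brevity it omits the latent-variable correction and uses illustrative penalty weights and random data.

```python
# Sketch: joint Gaussian graphical model estimation with sparsity and a similarity coupling.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
p, n = 8, 200
X1, X2 = rng.normal(size=(n, p)), rng.normal(size=(n, p))
S1, S2 = np.cov(X1.T), np.cov(X2.T)        # sample covariances of the observed signals

T1 = cp.Variable((p, p), PSD=True)          # precision matrix of graph 1
T2 = cp.Variable((p, p), PSD=True)          # precision matrix of graph 2
off = 1 - np.eye(p)                         # penalize only off-diagonal entries
obj = (-cp.log_det(T1) + cp.trace(S1 @ T1) - cp.log_det(T2) + cp.trace(S2 @ T2)
       + 0.1 * cp.sum(cp.abs(cp.multiply(off, T1)))
       + 0.1 * cp.sum(cp.abs(cp.multiply(off, T2)))
       + 0.5 * cp.sum(cp.abs(T1 - T2)))     # graph-similarity coupling
prob = cp.Problem(cp.Minimize(obj))
prob.solve()
print("entries where the two estimates differ:",
      int((np.abs(T1.value - T2.value) > 1e-3).sum()))
```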